Fractional Newton–Raphson Method Accelerated with Aitken’s Method

Authors

Abstract

In this paper, we present a way to accelerate the convergence of the fractional Newton–Raphson (F N–R) method, which appears to have at least linear order of convergence when the order α of the derivative is different from one. A simplified construction of the Riemann–Liouville (R–L) integral and derivative operators is presented, along with examples of their application to functions. Furthermore, Aitken's method is introduced, and it is explained why it is able to accelerate iterative methods. Finally, the results obtained when implementing the accelerated F N–R method are shown; it converges faster than the simple N–R method.
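The two ingredients in the abstract can be sketched together. The following is a hedged illustration, not the paper's exact algorithm: a fractional Newton–Raphson iteration for a polynomial, using the standard Riemann–Liouville derivative of monomials, D^α x^k = Γ(k+1)/Γ(k+1−α) · x^(k−α) for x > 0, followed by Aitken's Δ² extrapolation of the resulting iterates. All function names and parameter choices here are illustrative assumptions.

```python
import math

def rl_derivative_poly(coeffs, alpha, x):
    """Riemann-Liouville fractional derivative of the polynomial
    sum_k c_k x^k at x > 0, term by term:
    D^alpha x^k = Gamma(k+1) / Gamma(k+1-alpha) * x^(k-alpha)."""
    return sum(
        c * math.gamma(k + 1) / math.gamma(k + 1 - alpha) * x ** (k - alpha)
        for k, c in enumerate(coeffs)
    )

def fractional_newton(coeffs, alpha, x0, n_steps):
    """Sequence of fractional Newton-Raphson iterates
    x_{n+1} = x_n - f(x_n) / D^alpha f(x_n)."""
    xs = [x0]
    for _ in range(n_steps):
        x = xs[-1]
        f = sum(c * x ** k for k, c in enumerate(coeffs))
        xs.append(x - f / rl_derivative_poly(coeffs, alpha, x))
    return xs

def aitken(xs):
    """Aitken's delta-squared extrapolation of a sequence."""
    out = []
    for a, b, c in zip(xs, xs[1:], xs[2:]):
        denom = c - 2.0 * b + a
        out.append(c if denom == 0.0 else a - (b - a) ** 2 / denom)
    return out

# Illustrative example: f(x) = x^2 - 2 (root sqrt(2)), derivative order alpha = 0.9
coeffs = [-2.0, 0.0, 1.0]
xs = fractional_newton(coeffs, alpha=0.9, x0=1.5, n_steps=8)
axs = aitken(xs)
```

Since the F N–R sequence converges roughly geometrically when α ≠ 1, the extrapolated sequence `axs` typically reaches a given tolerance in fewer iterations than `xs`.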



Similar articles

Accelerated Linearized Bregman Method

In this paper, we propose and analyze an accelerated linearized Bregman (ALB) method for solving the basis pursuit and related sparse optimization problems. This accelerated algorithm is based on the fact that the linearized Bregman (LB) algorithm is equivalent to a gradient descent method applied to a certain dual formulation. We show that the LB method requires O(1/ε) iterations to obtain an ...


Accelerated Overrelaxation Method

This paper describes a method for the numerical solution of linear systems of equations. The method is a two-parameter generalization of the Successive Overrelaxation (SOR) method such that when the two parameters involved are equal it coincides with the SOR method. Finally, a numerical example is given to show the superiority of the new method.
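As a sketch of the idea (assuming the standard componentwise form of AOR; the matrix, parameter values, and names below are illustrative, not taken from the paper), one AOR sweep combines a Jacobi-style update weighted by a relaxation parameter ω with a correction weighted by an acceleration parameter r on the components already updated in the sweep; taking r = ω recovers SOR:

```python
def aor_step(A, b, x, omega, r):
    """One Accelerated Overrelaxation (AOR) sweep for A x = b."""
    n = len(b)
    x_new = list(x)
    for i in range(n):
        # Jacobi-style residual using the previous iterate
        resid = b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)
        # correction from components already updated in this sweep
        corr = sum(A[i][j] * (x_new[j] - x[j]) for j in range(i))
        x_new[i] = (1.0 - omega) * x[i] + (omega * resid - r * corr) / A[i][i]
    return x_new

def aor_solve(A, b, omega, r, tol=1e-12, max_iter=1000):
    """Iterate AOR sweeps until the update is below tol."""
    x = [0.0] * len(b)
    for _ in range(max_iter):
        x_new = aor_step(A, b, x, omega, r)
        if max(abs(u - v) for u, v in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

# Illustrative diagonally dominant system
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = aor_solve(A, b, omega=0.9, r=0.8)
```

With r = omega the correction term folds into the residual and the sweep is exactly an SOR sweep, matching the coincidence property described above.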


Optimum Accelerated Overrelaxation Method

In this paper we give the optimum parameters for the Accelerated Overrelaxation (AOR) method in the special case where the matrix coefficient of the linear system, which is solved, is consistently ordered with nonvanishing diagonal elements. Under certain assumptions, concerning the eigenvalues of the corresponding Jacobi matrix, it is shown that the optimum AOR method gives better convergence ...


Accelerated Adaptive Integration Method

Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AI...


Stochastic gradient method with accelerated stochastic dynamics

In this paper, we propose a novel technique to implement stochastic gradient methods, which are beneficial for learning from large datasets, through accelerated stochastic dynamics. A stochastic gradient method is based on mini-batch learning for reducing the computational cost when the amount of data is large. The stochasticity of the gradient can be mitigated by the injection of Gaussian nois...
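A minimal sketch of the general idea described in this snippet: mini-batch SGD with Gaussian noise injected into each update. The function name, learning rate, and noise scale are illustrative assumptions, not the paper's accelerated-dynamics scheme:

```python
import random

def noisy_sgd(grad_fn, data, w0, lr=0.05, batch_size=8,
              noise_scale=0.01, epochs=50, seed=0):
    """Mini-batch SGD with Gaussian noise added to each update."""
    rng = random.Random(seed)
    samples = list(data)
    w = w0
    for _ in range(epochs):
        rng.shuffle(samples)
        for i in range(0, len(samples), batch_size):
            batch = samples[i:i + batch_size]
            # average gradient over the mini-batch
            g = sum(grad_fn(w, z) for z in batch) / len(batch)
            # gradient step plus injected Gaussian noise
            w = w - lr * (g + noise_scale * rng.gauss(0.0, 1.0))
    return w

# Illustrative example: estimate the mean of the data by minimizing
# the squared loss (w - z)^2 / 2, whose gradient is w - z.
data = [3.0 + 0.02 * (k - 50) for k in range(101)]  # values centered at 3.0
w_hat = noisy_sgd(lambda w, z: w - z, data, w0=0.0)
```

Mini-batching keeps each step cheap on large datasets, while the injected noise (kept small here) perturbs the dynamics around the minimizer.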



Journal

Journal title: Axioms

Year: 2021

ISSN: 2075-1680

DOI: https://doi.org/10.3390/axioms10020047